353 research outputs found
Recommended from our members
An Event System Architecture for Scaling Scale-Resistant Services
Large organizations are deploying ever-increasing numbers of networked compute devices, from utilities installing smart controllers on electricity distribution cables, to the military giving PDAs to soldiers, to corporations putting PCs on the desks of employees. These computers are often far more capable than is needed to accomplish their primary task, whether it be guarding a circuit breaker, displaying a map, or running a word processor. These devices would be far more useful if they had some awareness of the world around them: a controller that resists tripping a switch, knowing that it would set off a cascade failure; a PDA that warns its owner of imminent danger; a PC that exchanges reports of suspicious network activity with its peers to identify stealthy computer crackers. In order to provide these higher-level services, the devices need a model of their environment. The controller needs a model of the distribution grid, the PDA needs a model of the battlespace, and the PC needs a model of the network and of normal network and user behavior. Unfortunately, not only might models such as these require substantial computational resources, but generating and updating them is even more demanding. Model-building algorithms tend to be bad in three ways: requiring large amounts of CPU and memory to run, needing large amounts of data from the outside to stay up to date, and running so slowly that they can't keep up with any fast changes in the environment that might occur. We can solve these problems by reducing the scope of the model to the immediate locale of the device, since reducing the size of the model makes the problem of model generation much more tractable. But such models are also much less useful, having no knowledge of the wider system. This thesis proposes a better solution to this problem called Level of Detail, after the computer graphics technique of the same name.
Instead of simplifying the representation of distant objects, however, we simplify less-important data. Compute devices in the system receive streams of data that are a mixture of detailed data from devices that directly affect them and data summaries (aggregated data) from less directly influential devices. The degree to which the data is aggregated (i.e., how much it is reduced) is determined by calculating an influence metric between the target device and the remote device. The smart controller thus receives a continuous stream of raw data from the adjacent transformer, but only an occasional small status report summarizing all the equipment in a neighborhood in another part of the city. This thesis describes the data distribution system, the aggregation functions, and the influence metrics that can be used to implement such a system. I also describe my current progress towards establishing a test environment and validating the concepts, and describe the next steps in the research plan.
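The influence-driven aggregation described above can be sketched in a few lines. This is a toy illustration only: the function names (`influence`, `summarize`), the inverse-distance metric, and the 0.5 threshold are assumptions for the example, not the thesis's actual design.

```python
# Toy Level of Detail sketch: deliver raw data from influential devices,
# coarse summaries from distant ones. All names and numbers are invented.

def influence(target, remote):
    """Assumed influence metric: inverse of topological distance."""
    return 1.0 / (1 + abs(target - remote))

def summarize(readings, level):
    """Forward the raw stream at high influence, a mean otherwise."""
    if level >= 0.5:            # directly influential device: raw data
        return readings
    return [sum(readings) / len(readings)]  # remote device: one summary

# Device 0 is the target; device 1 is adjacent, device 9 is far away.
raw = {1: [3.0, 4.0, 5.0], 9: [10.0, 20.0, 30.0]}
delivered = {d: summarize(r, influence(0, d)) for d, r in raw.items()}
# The adjacent device arrives in full detail; the remote one as one value.
```

Reducing a remote device's stream to a single aggregate is what makes the target's model both smaller and cheaper to keep current.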
Susceptibility Ranking of Electrical Feeders: A Case Study
Ranking problems arise in a wide range of real-world applications where an ordering on a set of examples is preferred to a classification model. These applications include collaborative filtering, information retrieval and ranking the components of a system by susceptibility to failure. In this paper, we present an ongoing project to rank the feeder cables of a major metropolitan area's electrical grid according to their susceptibility to outages. We describe our framework and the application of machine learning ranking methods, using scores from Support Vector Machines (SVM), RankBoost and Martingale Boosting. Finally, we present our experimental results and the lessons learned from this challenging real-world application.
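Score-based ranking of the kind the abstract describes reduces to sorting components by a learned decision score, as an SVM's decision function would produce. The feeder names, features, and weights below are invented for illustration, not the project's data.

```python
# Hypothetical feeders with made-up features: [load factor, past outages].
feeders = {
    "F1": [0.2, 3],
    "F2": [0.9, 7],
    "F3": [0.5, 1],
}
weights = [1.0, 0.4]  # stand-in for weights a linear SVM might learn

def score(x):
    """Linear decision score; higher means more susceptible to failure."""
    return sum(w * xi for w, xi in zip(weights, x))

# Sort descending by score to obtain the susceptibility ranking.
ranking = sorted(feeders, key=lambda f: score(feeders[f]), reverse=True)
```

RankBoost and Martingale Boosting produce scores differently (pairwise and boosting-based, respectively), but the final ranked list is read off the scores the same way.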
Kinesthetics eXtreme: An External Infrastructure for Monitoring Distributed Legacy Systems
Autonomic computing - self-configuring, self-healing, self-optimizing applications, systems and networks - is widely believed to be a promising solution to ever-increasing system complexity and the spiraling costs of human system management as systems scale to global proportions. Most results to date, however, suggest ways to architect new software constructed from the ground up as autonomic systems, whereas in the real world organizations continue to use stovepipe legacy systems and/or build 'systems of systems' that draw from a gamut of new and legacy components involving disparate technologies from numerous vendors. Our goal is to retrofit autonomic computing onto such systems, externally, without any need to understand or modify the code, and in many cases even when it is impossible to recompile. We present a meta-architecture implemented as active middleware infrastructure to explicitly add autonomic services via an attached feedback loop that provides continual monitoring and, as needed, reconfiguration and/or repair. Our lightweight design and separation of concerns enable easy adoption of individual components, as well as the full infrastructure, for use with a large variety of legacy systems, new systems, and systems of systems. We summarize several experiments spanning multiple domains.
Retrofitting Autonomic Capabilities onto Legacy Systems
Autonomic computing - self-configuring, self-healing, self-optimizing applications, systems and networks - is a promising solution to ever-increasing system complexity and the spiraling costs of human management as systems scale to global proportions. Most results to date, however, suggest ways to architect new software constructed from the ground up as autonomic systems, whereas in the real world organizations continue to use stovepipe legacy systems and/or build 'systems of systems' that draw from a gamut of disparate technologies from numerous vendors. Our goal is to retrofit autonomic computing onto such systems, externally, without any need to understand, modify or even recompile the target system's code. We present an autonomic infrastructure that operates similarly to active middleware, to explicitly add autonomic services to pre-existing systems via continual monitoring and a feedback loop that performs, as needed, reconfiguration and/or repair. Our lightweight design and separation of concerns enable easy adoption of individual components, independent of the rest of the full infrastructure, for use with a large variety of target systems. This work has been validated by several case studies spanning multiple application domains.
An Approach to Autonomizing Legacy Systems
Adding adaptation capabilities to existing distributed systems is a major concern. The question addressed here is how to retrofit existing systems with self-healing, adaptation and/or self-management capabilities. The problem is obviously intensified for 'systems of systems' composed of components, whether new or legacy, that may have been developed by different vendors, mixing and matching COTS and 'open source' components. This system composition model is expected to be increasingly common in high performance computing. The usual approach is to train technicians to understand the complexities of these components and their connections, including performance tuning parameters, so that they can then manually monitor and reconfigure the system as needed. We envision instead attaching a 'standard' feedback loop infrastructure to existing distributed systems for the purposes of continual monitoring and dynamically adapting their activities and performance. (This approach can also be applied to 'new' systems, as an alternative to 'building in' adaptation facilities, but we do not address that here.) Our proposed infrastructure consists of multiple layers with the objectives of probing, measuring and reporting of activity and state within the execution of the legacy system among its components and connectors; gauging, analysis and interpretation of the reported events; and possible feedback to focus the probes and gauges to drill deeper, or, when necessary, direct but automatic reconfiguration of the running system.
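The probe/gauge/feedback layering these abstracts describe can be sketched as one pass of an external control loop. The metric name, threshold, and repair action below are illustrative assumptions, not the actual infrastructure's API.

```python
# Minimal external monitor/gauge/reconfigure loop over a legacy system's
# state. The legacy system itself is untouched; only its observable
# state is read and an external knob (worker count) is adjusted.

def probe(state):
    """Probe layer: report raw activity without modifying the target."""
    return {"queue_len": state["queue_len"]}

def gauge(event):
    """Gauge layer: interpret reported events against a threshold."""
    return "overloaded" if event["queue_len"] > 10 else "ok"

def reconfigure(diagnosis, state):
    """Feedback layer: automatically adapt the running system as needed."""
    if diagnosis == "overloaded":
        state["workers"] += 1   # e.g., start an extra worker process
    return state

# One iteration of the attached feedback loop.
state = {"queue_len": 15, "workers": 2}
state = reconfigure(gauge(probe(state)), state)
```

Keeping the three layers as separate functions mirrors the separation of concerns the abstracts emphasize: probes, gauges, and controllers can be adopted independently.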
DNA insertions distinguish the duplicated renin genes of DBA/2 and M. hortulanus mice
In a survey of inbred and wild mouse DNAs for genetic variation at the duplicate renin loci, Ren-1 and Ren-2, a variant NotI hybridization pattern was observed in the wild mouse M. hortulanus. To determine the basis for this variation, the structure of the M. hortulanus renin loci has been examined in detail and compared to that of the inbred strain DBA/2. Overall, the gross features of structure in this chromosomal region are conserved in both Mus species. In particular, the sequence at the recombination site between the linked Ren-1 and Ren-2 loci was found to be identical in both DBA/2 and M. hortulanus, indicating that the renin gene duplication occurred prior to the divergence of the ancestors of these mice. Renin flanking sequences in M. hortulanus, however, were found to lack four DNA insertions totaling approximately 10.5 kb which reside near the DBA/2 loci. The postduplication evolution of the mouse renin genes is thus characterized by a number of insertion and/or deletion events within nearby flanking sequences. Analysis of renin expression showed little or no difference between these mice in steady-state renin RNA levels in most tissues examined, suggesting that these insertions do not influence expression at those sites. A notable exception is the adrenal gland, in which DBA/2 and M. hortulanus mice exhibit different patterns of developmentally regulated renin expression.
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/46988/1/335_2004_Article_BF00570438.pd
A Holistic Approach to Service Survivability
We present SABER (Survivability Architecture: Block, Evade, React), a proposed survivability architecture that blocks, evades and reacts to a variety of attacks by using several security and survivability mechanisms in an automated and coordinated fashion. Contrary to the ad hoc manner in which contemporary survivable systems are built--using isolated, independent security mechanisms such as firewalls, intrusion detection systems and software sandboxes--SABER integrates several different technologies in an attempt to provide a unified framework for responding to the wide range of attacks malicious insiders and outsiders can launch. This coordinated multi-layer approach will be capable of defending against attacks targeted at various levels of the network stack, such as congestion-based DoS attacks, software-based DoS or code-injection attacks, and others. Our fundamental insight is that while multiple lines of defense are useful, most conventional, uncoordinated approaches fail to exploit the full range of available responses to incidents. By coordinating the response, the ability to survive even in the face of successful security breaches increases substantially. We discuss the key components of SABER, how they will be integrated together, and how we can leverage the promising results of the individual components to improve survivability in a variety of coordinated attack scenarios. SABER is currently in the prototyping stages, with several interesting open research topics.
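The coordinated-response idea can be illustrated as a central dispatcher that maps each alert to one deliberate action instead of letting isolated mechanisms react independently. The attack categories and response verbs below echo the abstract's block/evade/react terminology, but this policy table is an invented example, not SABER's actual design.

```python
# Invented policy table: attack class -> coordinated response verb.
RESPONSES = {
    "congestion_dos": "block",   # e.g., upstream rate-limiting
    "code_injection": "react",   # e.g., sandbox the process and patch
    "probe_scan":     "evade",   # e.g., migrate the targeted service
}

def coordinate(alerts):
    """Choose one coordinated action per alert, defaulting to 'react'
    for unknown attack classes, rather than firing every defense."""
    return [(alert, RESPONSES.get(alert, "react")) for alert in alerts]

plan = coordinate(["congestion_dos", "code_injection"])
```

A single coordination point is what lets responses at different stack levels (network filtering, process sandboxing, service migration) be chosen consistently for one incident.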
Machine Learning for the New York City Power Grid
Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce (1) feeder failure rankings, (2) cable, joint, terminator, and transformer rankings, (3) feeder Mean Time Between Failure (MTBF) estimates, and (4) manhole events vulnerability rankings. The process in its most general form can handle diverse, noisy sources that are historical (static), semi-real-time, or real-time, incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF), and includes an evaluation of results via cross-validation and blind test. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support; such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges in working with historical electrical grid data that were not designed for predictive purposes. The “rawness” of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City's electrical grid.
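Evaluating a ranked list on a blind test, as the abstract mentions, commonly comes down to checking how well the predicted scores order the components that actually failed above those that did not. The sketch below computes that pairwise ordering fraction (an AUC-style measure) on invented held-out data; it is an illustration of the evaluation idea, not the authors' pipeline.

```python
# Illustrative blind-test evaluation of a ranked component list.

def rank_auc(scores, failed):
    """Fraction of (failed, non-failed) pairs ordered correctly,
    i.e., where the failed component received the higher score."""
    pos = [s for s, f in zip(scores, failed) if f]
    neg = [s for s, f in zip(scores, failed) if not f]
    pairs = [(p, n) for p in pos for n in neg]
    return sum(p > n for p, n in pairs) / len(pairs)

# Hypothetical held-out data: predicted scores and observed failures.
scores = [0.9, 0.8, 0.3, 0.1]
failed = [True, False, True, False]
auc = rank_auc(scores, failed)  # 3 of 4 pairs ordered correctly: 0.75
```

Because the measure depends only on ordering, it suits supervised ranking outputs directly; MTBF estimates would instead be evaluated against observed failure intervals.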
S2k guidelines for the diagnosis and treatment of herpes zoster and postherpetic neuralgia
The present guidelines are aimed at residents and board-certified specialists in the fields of dermatology, ophthalmology, ENT, pediatrics, neurology, virology, infectious diseases, anesthesiology, general medicine and any other medical specialties involved in the management of patients with herpes zoster. They are also intended as a guide for policymakers and health insurance funds. The guidelines were developed by dermatologists, virologists, ophthalmologists, ENT physicians, neurologists, pediatricians and anesthesiologists/pain specialists using a formal consensus process (S2k). Readers are provided with an overview of the clinical and molecular diagnostic workup, including antigen detection, antibody tests and viral culture. Special diagnostic situations and complicated disease courses are discussed. The authors address general and special aspects of antiviral therapy for herpes zoster and postherpetic neuralgia. Furthermore, the guidelines provide detailed information on pain management including a schematic overview, and they conclude with a discussion of topical treatment options.
Running couplings and triviality of field theories on non-commutative spaces
We examine the issue of renormalizability of asymptotically free field theories on non-commutative spaces. As an example, we solve the non-commutative O(N) invariant Gross-Neveu model at large N. On commutative space this is a renormalizable model with non-trivial interactions. On the non-commutative space, if we take the translation invariant ground state, we find that the model is non-renormalizable. Removing the ultraviolet cutoff yields a trivial non-interacting theory.
Comment: LaTeX, 9 pages; minor changes, references and clarifications added
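For reference, the commutative O(N) Gross-Neveu model that is solved at large N is conventionally defined by the Lagrangian below; this is the standard textbook form with an implicit sum over the flavor index, not necessarily the paper's normalization.

```latex
\mathcal{L} \;=\; \bar{\psi}_a\, i\gamma^{\mu}\partial_{\mu}\psi_a
\;+\; \frac{g^{2}}{2N}\bigl(\bar{\psi}_a\psi_a\bigr)^{2},
\qquad a = 1,\dots,N .
```

On the non-commutative space, ordinary products in the interaction term are replaced by the Moyal star product, which is what alters the model's ultraviolet behavior.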